42 research outputs found

    Influences on proxemic behaviors in human-robot interaction

    Full text link
    Abstract — As robots enter the everyday physical world of people, it is important that they abide by society’s unspoken social rules, such as respecting people’s personal space. In this paper, we explore issues related to human personal space around robots, beginning with a review of the existing literature in human-robot interaction regarding the dimensions of people, robots, and contexts that influence human-robot interactions. We then present several research hypotheses, which we tested in a controlled experiment (N=30). Using a 2 (robotics experience vs. none: between-participants) x 2 (robot head oriented toward a participant’s face vs. legs: within-participants) mixed-design experiment, we explored the factors that influence proxemic behavior around robots in several situations: (1) people approaching a robot, (2) people being approached by an autonomously moving robot, and (3) people being approached by a teleoperated robot. We found that personal experience with pets and robots decreases a person’s personal space around robots. In addition, when the robot’s head is oriented toward the person’s face, it increases the minimum comfortable distance for women, but decreases the minimum comfortable distance for men. We also found that the personality trait of agreeableness decreases personal spaces when people approach robots, while the personality trait of neuroticism and having negative attitudes toward robots increase personal spaces when robots approach people. These results have implications for both human-robot interaction theory and design.
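
    The 2 x 2 mixed design described above (one between-participants factor, one within-participants factor) can be made concrete with a short analysis sketch. The following uses synthetic data and the pingouin library; the variable names and effect sizes are illustrative assumptions, not the authors' dataset or analysis code.

        # Illustrative only: synthetic data standing in for the study's measurements.
        # Assumes the pingouin statistics library (pip install pingouin).
        import numpy as np
        import pandas as pd
        import pingouin as pg

        rng = np.random.default_rng(0)
        n = 30  # participants, as in the study

        df = pd.DataFrame({
            "participant": np.repeat(np.arange(n), 2),
            # Between-participants factor: prior robotics experience.
            "experience": np.repeat(rng.choice(["robotics", "none"], size=n), 2),
            # Within-participants factor: robot head orientation.
            "head": np.tile(["face", "legs"], n),
        })
        # Hypothetical dependent variable: minimum comfortable distance (meters).
        df["distance"] = 0.5 + 0.1 * (df["head"] == "face") + rng.normal(0, 0.1, len(df))

        # 2 x 2 mixed-design ANOVA: one between factor, one within factor.
        print(pg.mixed_anova(data=df, dv="distance", within="head",
                             between="experience", subject="participant"))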

    What do Collaborations with the Arts Have to Say About Human-Robot Interaction?

    Get PDF
    This is a collection of papers presented at the workshop "What Do Collaborations with the Arts Have to Say About HRI?", held at the 2010 Human-Robot Interaction Conference in Osaka, Japan.

    Got Info? Examining the Consequences of Inaccurate Information Systems

    Get PDF
    It is a desirable goal to balance the information given to the user against the potential adverse effects on cognitive processing and on the perception of information systems. In this experiment, we investigated the minimum level of information accuracy necessary in an in-car information system to elicit positive behavioral and attitudinal responses from the driver. Sixty participants each drove in a simulator for 25 minutes; driving performance data were collected automatically, and drivers later completed questionnaires for attitudinal data. Participants were divided into three groups: one driving with a 100% accurate system, one driving with a 70% accurate system, and one driving without an in-car system. The in-car system had a clear positive effect on driving performance, and results show that decreasing the accuracy of the system decreases both driving performance and trust in the in-car system. The data also indicate that female drivers have a higher tolerance for inaccuracies in an in-car system; design implications are discussed.

    Gesture2Path: Imitation Learning for Gesture-aware Navigation

    Full text link
    As robots increasingly enter human-centered environments, they must not only be able to navigate safely around humans, but also adhere to complex social norms. Humans often rely on non-verbal communication through gestures and facial expressions when navigating around other people, especially in densely occupied spaces. Consequently, robots also need to be able to interpret gestures as part of solving social navigation tasks. To this end, we present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control. Gestures are interpreted by a neural network that operates on streams of images, while a state-of-the-art model-predictive control algorithm solves point-to-point navigation tasks. We deploy our method on real robots and showcase the effectiveness of our approach for four gesture navigation scenarios: left, right, follow me, and make a circle. Our experiments indicate that our method is able to successfully interpret complex human gestures and to use them as a signal to generate socially compliant trajectories for navigation tasks. We validated our method based on in-situ ratings of participants interacting with the robots.
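
    The paper's networks and controller are not reproduced in this listing, but the overall architecture, a gesture classifier whose output reshapes the objective of a model-predictive controller, can be sketched as follows. Every name here (classify_gesture, the cost terms, the sampling-based MPC) is a hypothetical stand-in under assumed semantics, not the authors' implementation.

        # Hypothetical sketch of gesture-conditioned MPC navigation (not the paper's code).
        import numpy as np

        def classify_gesture(image_stream):
            """Stand-in for the paper's image-based neural gesture classifier."""
            return "left"  # one of: "left", "right", "follow_me", "make_a_circle"

        def gesture_cost(traj, gesture, person_xy):
            """Extra cost steering trajectories to comply with the observed gesture."""
            end = traj[-1]
            if gesture == "left":   # pass on one side: penalize ending on the +y side
                return max(0.0, end[1] - person_xy[1]) * 5.0
            if gesture == "right":  # mirror case: penalize ending on the -y side
                return max(0.0, person_xy[1] - end[1]) * 5.0
            if gesture == "follow_me":  # stay close to the person
                return np.linalg.norm(end - person_xy) * 2.0
            if gesture == "make_a_circle":  # keep a fixed radius around the person
                return abs(np.linalg.norm(end - person_xy) - 1.5) * 5.0
            return 0.0

        def mpc_step(state, goal, gesture, person_xy, horizon=10, samples=256, dt=0.1):
            """Sampling-based MPC: roll out random velocity sequences, keep the best."""
            rng = np.random.default_rng()
            best_u, best_cost = None, np.inf
            for _ in range(samples):
                u = rng.uniform(-1.0, 1.0, size=(horizon, 2))  # candidate vx, vy
                traj = state + np.cumsum(u * dt, axis=0)       # integrate simple velocity model
                cost = np.linalg.norm(traj[-1] - goal) + gesture_cost(traj, gesture, person_xy)
                if cost < best_cost:
                    best_u, best_cost = u, cost
            return best_u[0]  # apply only the first action (receding horizon)

        state, goal, person = np.zeros(2), np.array([3.0, 0.0]), np.array([1.5, 0.0])
        g = classify_gesture(image_stream=None)
        print("first commanded velocity:", mpc_step(state, goal, g, person))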

    Facial expression analysis for predicting unsafe driving behavior

    Get PDF
    Abstract — Pervasive computing provides an ideal framework for active driver support systems in that context-aware systems are embedded in the car to support an ongoing human task. In the current study, we investigate how, and with what success, tracking driver facial features can add to the predictive accuracy of driver assistance systems. Using web cameras and a driving simulator, we captured the facial expressions and driving behaviors of 49 participants while they drove a scripted 40-minute course. We extracted key facial features of the drivers using a facial recognition software library and trained machine learning classifiers on the movements of these facial features and the outputs from the car. We identified key facial features associated with driving accidents and evaluated their predictive accuracy at varying pre-accident intervals, uncovering important temporal trends. We also discuss implications for real-life driver assistance systems.
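
    The train-and-evaluate loop described above, classifiers scored at several pre-accident intervals, can be sketched briefly. The sketch below uses scikit-learn on synthetic feature arrays; the window counts, feature dimensionality, and classifier choice are assumptions for illustration, not the study's pipeline.

        # Hypothetical sketch of the evaluation loop (not the study's code).
        # Assumes per-window facial-feature movements X and accident labels y exist.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_windows, n_features = 500, 22                # dimensions are assumed
        X = rng.normal(size=(n_windows, n_features))   # feature movement per time window
        y = rng.integers(0, 2, size=n_windows)         # 1 = window precedes an accident

        # Evaluate predictive accuracy at several pre-accident intervals (seconds).
        for interval in (1, 2, 4):
            # In a real pipeline, X would be rebuilt from frames ending `interval`
            # seconds before each accident; the same synthetic X is reused here.
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            scores = cross_val_score(clf, X, y, cv=5)
            print(f"{interval}s before accident: mean CV accuracy = {scores.mean():.2f}")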

    Help me help you: Interfaces for personal robots

    Get PDF
    Index Terms—HRI, mobile user interface, information theory

    I. RESEARCH PROBLEM AND A PROPOSAL

    The communication bottleneck between robots and people: people are adept at compensating for communication limitations, changing their communicative strategies when talking to, for example, pets and babies. We propose to approach this problem by accounting for limitations in robot abilities and taking advantage of already familiar human-computer interaction models, leveraging a communication model based upon Information Theory. Using this design perspective, we present three different mobile user interfaces that were fully developed and implemented on a PR2 (Personal Robot 2) [6] for task domains in navigation, perception, learning, and manipulation.

    II. RELEVANT THEORIES

    We can observe parallels between human-robot interaction and the interaction between humans and general complex autonomous systems. Sheridan's taxonomy of complex human-machine systems describes the following sequence of operations: (1) acquire information, (2) analyze and display information, (3) decide on an action, and (4) implement that action [7, p. 61]. This provides the groundwork for identifying the stages at which people and/or robots should lead. In the current projects, the personal robot autonomously completes steps 1, 2, and 4, and the person completes step 3. Thus, the user interface design must address how the robot analyzes and displays its sensor information and world model to the human, and how the human can effectively communicate desired actions to the robot. An analysis of our case studies in Sheridan's framework is given in the full paper. Gold proposed an Information Pipeline model for HRI based upon information theory [8], a mathematical model of communication developed for quantifying the amount of information that can be transported through a given channel. Schramm [9] developed a theory of communication that put these ideas into the context of two-way joint communications, which is helpful when considering the large amount of overhead involved in encoding and decoding messages sent between people and robots. The focus of the projects in this paper was on designing interfaces that apply this theory to human-robot communication: when the robot encodes messages in a way that humans can understand, and humans encode messages in a way that robots can understand, communication becomes easy and effective.

    III. THE DESIGN SPACE AND THREE UIS

    The personal robot platform used throughout these projects is the PR2, and the robot behaviors are built using the Robot Operating System (ROS).
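
    As a concrete worked instance of this information-theoretic framing (not an example from the paper): if a robot misrecognizes a binary command with probability p, the human-robot channel behaves like a binary symmetric channel with capacity C = 1 - H(p) bits per command, which can be computed directly.

        # Illustrative worked example of the information-theoretic view.
        # Binary symmetric channel: capacity C = 1 - H(p), H the binary entropy.
        import math

        def binary_entropy(p):
            if p in (0.0, 1.0):
                return 0.0
            return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        for p in (0.0, 0.05, 0.2, 0.5):
            capacity = 1.0 - binary_entropy(p)
            print(f"error rate {p:.2f}: capacity {capacity:.3f} bits/command")
        # At p = 0.5 the capacity is zero: an interface that decodes commands wrong
        # half the time conveys nothing, which motivates encoding messages that
        # both parties can reliably decode.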

    Bringing design considerations to the mobile phone and driving debate

    Get PDF
    Abstract — Though legislation increasingly discourages drivers from holding their mobile phones while talking, hands-free devices do not improve driver safety. We offer two design alternatives to improve driver safety in the context of voice-based user interfaces and mobile phone conversations in cars: side tones (the auditory feedback used in landline phones) and the location of speakers. In a 2 (side tone: present vs. not) x 2 (location of speakers: headphones vs. dashboard) between-participants experiment (N=48), we investigated the impact of these features on driver experience and performance during a simulated mobile phone conversation while driving. Participants became more verbally engaged in the conversation when side tones were present, but also experienced more cognitive load. Participants drove more safely when voices were projected from the dashboard rather than from headphones. Implications for driver user interface design are discussed.
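
    Unlike the mixed design earlier in this listing, this is a fully between-participants 2 x 2, which maps onto an ordinary two-way ANOVA. The sketch below uses statsmodels on synthetic data; the dependent variable and effect are assumed for illustration only.

        # Illustrative only: synthetic data standing in for the study's measurements.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        rng = np.random.default_rng(2)
        n = 48  # participants, as in the experiment
        df = pd.DataFrame({
            "side_tone": rng.choice(["present", "absent"], size=n),
            "speakers": rng.choice(["headphones", "dashboard"], size=n),
        })
        # Hypothetical dependent variable, e.g. lane-keeping error.
        df["lane_error"] = rng.normal(1.0, 0.2, n) + 0.1 * (df["speakers"] == "headphones")

        # 2 x 2 between-participants ANOVA with interaction term.
        model = ols("lane_error ~ C(side_tone) * C(speakers)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))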